Seeded Graph Matching via Large Neighborhood Statistics
We study a well known noisy model of the graph isomorphism problem. In this
model, the goal is to perfectly recover the vertex correspondence between two
edge-correlated Erd\H{o}s-R\'{e}nyi random graphs, with an initial seed set of
correctly matched vertex pairs revealed as side information. For seeded
problems, our result provides a significant improvement over previously known
results. We show that it is possible to achieve the information-theoretic limit
of graph sparsity in time polynomial in the number of vertices. Moreover,
we show that the number of seeds needed for exact recovery in polynomial time
can be as low as $n^{3\epsilon}$ in the sparse graph regime (with the average
degree smaller than $n^{\epsilon}$) and $\Omega(\log n)$ in the dense graph regime.
Our results also shed light on the unseeded problem. In particular, we give
sub-exponential time algorithms for sparse models and an $n^{O(\log n)}$-time
algorithm for dense models for some parameters, including some that are not
covered by recent results of Barak et al.
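The role of seeds can be illustrated with a toy matcher that, unlike the paper's large-neighborhood statistics, uses only 1-hop information: each unseeded vertex is paired with the vertex of the other graph whose neighborhood agrees best on the seed set. All names and parameters below are illustrative, and the fully correlated case (the two graphs are identical) stands in for an edge-correlated Erdős–Rényi pair.

```python
import random

def erdos_renyi(n, p, rng):
    """Adjacency sets of an Erdos-Renyi G(n, p) random graph."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def seeded_match(adj1, adj2, seeds):
    """Greedily pair each unseeded vertex of graph 1 with the vertex of
    graph 2 sharing the most seed neighbours (a crude 1-hop stand-in
    for the paper's large-neighborhood statistics)."""
    match = dict(seeds)
    used = set(match.values())
    for u in adj1:
        if u in match:
            continue
        sig = {seeds[s] for s in adj1[u] if s in seeds}
        best, best_score = None, -1
        for v in adj2:
            if v not in used:
                score = len(sig & adj2[v])
                if score > best_score:
                    best, best_score = v, score
        match[u] = best
        used.add(best)
    return match

rng = random.Random(0)
n, n_seeds = 60, 20
adj = erdos_renyi(n, 0.3, rng)
seeds = {v: v for v in range(n_seeds)}    # identity seed set
match = seeded_match(adj, adj, seeds)     # fully correlated case
correct = sum(1 for u, v in match.items() if u == v)
print(f"{correct} of {n} vertices matched correctly")
```

Even this 1-hop heuristic recovers most of the correspondence once the seed signatures are distinctive; the paper's algorithms get by with far fewer seeds by looking at larger neighborhoods.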
How to Couple from the Past Using a Read-Once Source of Randomness
We give a new method for generating perfectly random samples from the
stationary distribution of a Markov chain. The method is related to coupling
from the past (CFTP), but only runs the Markov chain forwards in time, and
never restarts it at previous times in the past. The method is also related to
an idea known as PASTA (Poisson arrivals see time averages) in the operations
research literature. Because the new algorithm can be run using a read-once
stream of randomness, we call it read-once CFTP. The memory and time
requirements of read-once CFTP are on par with the requirements of the usual
form of CFTP, and for a variety of applications the requirements may be
noticeably less. Some perfect sampling algorithms for point processes are based
on an extension of CFTP known as coupling into and from the past; for
completeness, we give a read-once version of coupling into and from the past,
but it remains impractical. For these point process applications, we give an
alternative coupling method with which read-once CFTP may be used efficiently.
Comment: 28 pages, 2 figures
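For contrast with the read-once variant, here is a minimal sketch of standard monotone CFTP on a toy chain (a lazy biased walk on {0,...,m}); note how it must store and reuse its randomness each time it restarts further in the past, which is exactly what read-once CFTP avoids. The chain and parameters are illustrative, not from the paper.

```python
import random

def cftp_biased_walk(m=10, p=0.7, seed=1):
    """Standard monotone CFTP for a lazy biased walk on {0,...,m}:
    step up with prob p, down with prob 1-p, reflecting at the ends.
    Unlike read-once CFTP, this must store and reuse the same
    randomness each time it restarts further in the past."""
    rng = random.Random(seed)
    randomness = []              # U_{-1}, U_{-2}, ... kept for reuse
    T = 1
    while True:
        while len(randomness) < T:
            randomness.append(rng.random())
        lo, hi = 0, m            # extreme starting states at time -T
        for t in range(T - 1, -1, -1):
            u = randomness[t]    # the update map at time -t-1 is fixed
            step = 1 if u < p else -1
            lo = min(max(lo + step, 0), m)
            hi = min(max(hi + step, 0), m)
        if lo == hi:             # coalesced: exact stationary sample
            return lo
        T *= 2                   # otherwise restart further in the past

samples = [cftp_biased_walk(seed=s) for s in range(200)]
print(sum(samples) / len(samples))   # upward drift: mean close to m
```

Because the top and bottom chains sandwich every other trajectory, their coalescence certifies that the returned state is an exact draw from the stationary distribution, with no burn-in error.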
On the discrepancy of random low degree set systems
Motivated by the celebrated Beck-Fiala conjecture, we consider the random
setting where there are $n$ elements and $m$ sets and each element lies in $t$
randomly chosen sets. In this setting, Ezra and Lovett showed an
$O(\sqrt{t \log t})$ discrepancy bound in the regime when $n \le m$ and an
$O(1)$ bound when $n \gg m^t$.
In this paper, we give a tight $O(\sqrt{t})$ bound for the entire range of $n$
and $m$, under a mild assumption that $t = \Omega(\log\log m)^2$. The
result is based on two steps. First, applying the partial coloring method to
the case when $n \le m \log^{O(1)} m$ and using the properties of the random set
system, we show that the overall discrepancy incurred is at most $O(\sqrt{t})$.
Second, we reduce the general case to that of $n \le m \log^{O(1)} m$ using LP
duality and a careful counting argument.
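For intuition, a small experiment: in this random setting each of n elements joins t of the m sets, and for instances small enough to brute-force, the minimum discrepancy can be compared against the deterministic Beck-Fiala guarantee of 2t - 1. The instance sizes below are illustrative.

```python
import itertools
import random

def random_t_regular_system(n, m, t, rng):
    """Incidence lists: element i joins t distinct random sets."""
    sets = [[] for _ in range(m)]
    for i in range(n):
        for j in rng.sample(range(m), t):
            sets[j].append(i)
    return sets

def discrepancy(sets, n):
    """Exact minimum discrepancy by brute force over all +-1 colorings."""
    best = float("inf")
    for signs in itertools.product((-1, 1), repeat=n):
        worst = max(abs(sum(signs[i] for i in s)) for s in sets)
        best = min(best, worst)
    return best

rng = random.Random(0)
n, m, t = 12, 8, 3
sets = random_t_regular_system(n, m, t, rng)
d = discrepancy(sets, n)
print("discrepancy:", d, "vs Beck-Fiala bound:", 2 * t - 1)
```

Since every element lies in at most t sets, the Beck-Fiala theorem guarantees some coloring of discrepancy at most 2t - 1; the point of the paper is that random instances do much better, achieving O(sqrt(t)).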
Hamilton Cycles in a Class of Random Directed Graphs
We prove that almost every 3-in, 3-out digraph is Hamiltonian.
Exactly solvable model with two conductor-insulator transitions driven by impurities
We present an exact analysis of two conductor-insulator transitions in the
random graph model. The average connectivity is related to the concentration of
impurities. The adjacency matrix of a large random graph is used as a hopping
Hamiltonian. Its spectrum has a delta peak at zero energy. Our analysis is
based on an explicit expression for the height of this peak, and a detailed
description of the localized eigenvectors and of their contribution to the
peak. Starting from the low connectivity (high impurity density) regime, one
encounters an insulator-conductor transition for average connectivity
1.421529... and a conductor-insulator transition for average connectivity
3.154985.... We explain the spectral singularity at average connectivity
e=2.718281... and relate it to another enumerative problem in random graph
theory, the minimal vertex cover problem.
Comment: 4 pages revtex, 2 fig.eps [v2: new title, changed intro, reorganized text]
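The delta peak at zero energy is easy to observe numerically: diagonalize the adjacency matrix of a sparse random graph and count (numerically) zero eigenvalues. This sketch uses G(n, c/n) with average connectivity c; the sizes and the zero threshold are illustrative, and the paper's exact peak-height formula is not reproduced here.

```python
import numpy as np

def zero_mode_fraction(n, c, rng):
    """Height of the delta peak at zero energy: fraction of zero
    eigenvalues of the adjacency matrix of a G(n, c/n) random graph,
    used here as a hopping Hamiltonian."""
    upper = np.triu(rng.random((n, n)) < c / n, k=1)
    a = (upper + upper.T).astype(float)
    eigs = np.linalg.eigvalsh(a)
    return float(np.mean(np.abs(eigs) < 1e-8))

rng = np.random.default_rng(0)
for c in (1.0, 2.0, 4.0):
    f = zero_mode_fraction(400, c, rng)
    print(f"average connectivity {c}: zero-mode fraction ~ {f:.3f}")
```

Isolated vertices alone contribute a fraction of about exp(-c) of exact zero modes; localized eigenvectors supported on larger tree-like pieces add to the peak, which is what the paper's analysis quantifies.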
On the Approximability of Digraph Ordering
Given an n-vertex digraph D = (V, A), the Max-k-Ordering problem is to compute
a labeling $\ell : V \to \{1,\ldots,k\}$ maximizing the number of forward edges, i.e.
edges (u,v) such that $\ell(u) < \ell(v)$. For different values of k, this
reduces to Maximum Acyclic Subgraph (k=n) and Max-Dicut (k=2). This work
studies the approximability of Max-k-Ordering and its generalizations,
motivated by their applications to job scheduling with soft precedence
constraints. We give an LP rounding based 2-approximation algorithm for
Max-k-Ordering for any $k \in \{2,\ldots,n\}$, improving on the known
2k/(k-1)-approximation obtained via random assignment. The tightness of this
rounding is shown by proving that for any $k \in \{2,\ldots,n\}$ and constant
$\epsilon > 0$, Max-k-Ordering has an LP integrality gap of $2 - \epsilon$
for $n^{\Omega(1)}$ rounds of the Sherali-Adams hierarchy.
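The random-assignment baseline mentioned above is simple to state: with independent uniform labels from {1,...,k}, an edge (u,v) is forward with probability (k-1)/(2k), which yields the 2k/(k-1) factor in expectation. A quick check on a random digraph (names and sizes arbitrary):

```python
import random

def random_assignment(vertices, k, rng):
    """The 2k/(k-1)-approximation baseline for Max-k-Ordering:
    label every vertex independently and uniformly from {1,...,k}."""
    return {v: rng.randint(1, k) for v in vertices}

def forward_edges(edges, labels):
    """Count edges (u, v) with labels[u] < labels[v]."""
    return sum(1 for u, v in edges if labels[u] < labels[v])

rng = random.Random(0)
n, k = 500, 4
edges = [(u, v) for u, v in
         ((rng.randrange(n), rng.randrange(n)) for _ in range(3000))
         if u != v]
labels = random_assignment(range(n), k, rng)
frac = forward_edges(edges, labels) / len(edges)
# each edge is forward with probability (k-1)/(2k) = 0.375 for k = 4
print(f"forward fraction {frac:.3f}")
```

Since the optimum can label at most all edges forward, collecting a (k-1)/(2k) fraction of all edges in expectation already gives the 2k/(k-1) guarantee; the paper's LP rounding improves this to a factor 2 for every k.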
A further generalization of Max-k-Ordering is the restricted maximum acyclic
subgraph problem or RMAS, where each vertex v has a finite set of allowable
labels. We prove an LP rounding based approximation for it, improving on the
approximation recently given by Grandoni et al.
(Information Processing Letters, Vol. 115(2), Pages 182-185, 2015). In fact,
our approximation algorithm also works for a general version where the
objective counts the edges which go forward by at least a positive offset
specific to each edge.
The minimization formulation of digraph ordering is DAG edge deletion or
DED(k), which requires deleting the minimum number of edges from an n-vertex
directed acyclic graph (DAG) to remove all paths of length k. We show that
both the LP relaxation and a local ratio approach for DED(k) yield a
k-approximation for any $k$.
Comment: 21 pages, Conference version to appear in ESA 2015
The decimation process in random k-SAT
Let F be a uniformly distributed random k-SAT formula with n variables and m
clauses. Non-rigorous statistical mechanics ideas have inspired a message
passing algorithm called Belief Propagation Guided Decimation for finding
satisfying assignments of F. This algorithm can be viewed as an attempt at
implementing a certain thought experiment that we call the Decimation Process.
In this paper we identify a variety of phase transitions in the decimation
process and link these phase transitions to the performance of the algorithm.
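The decimation loop can be sketched with a much cruder proxy: instead of Belief Propagation marginals, satisfy unit clauses first and otherwise fix the most frequent free variable to its majority polarity, simplifying the formula after each step. This is not the paper's algorithm, only an illustration of the decimation process; the clause density is chosen well below the satisfiability threshold.

```python
import random

def random_ksat(n, m, k, rng):
    """m uniformly random k-clauses over variables 1..n (literal = +-v)."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n + 1), k)]
            for _ in range(m)]

def decimate(clauses, n):
    """Simplified decimation: satisfy unit clauses first; otherwise fix
    the most frequent free variable to its majority polarity (a crude
    stand-in for the BP marginal) and simplify. Returns a satisfying
    assignment, or None if an empty clause (contradiction) appears."""
    assign = {}
    clauses = [list(c) for c in clauses]
    while len(assign) < n:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is not None:
            v, val = abs(unit[0]), unit[0] > 0
        else:
            counts = {}
            for c in clauses:
                for lit in c:
                    counts[lit] = counts.get(lit, 0) + 1
            free = [w for w in range(1, n + 1) if w not in assign]
            v = max(free, key=lambda w: counts.get(w, 0) + counts.get(-w, 0))
            val = counts.get(v, 0) >= counts.get(-v, 0)
        assign[v] = val
        sat_lit, false_lit = (v, -v) if val else (-v, v)
        new = []
        for c in clauses:
            if sat_lit in c:
                continue                 # clause satisfied, drop it
            c = [l for l in c if l != false_lit]
            if not c:
                return None              # empty clause: decimation failed
            new.append(c)
        clauses = new
    return assign

rng = random.Random(0)
n, m = 100, 200                          # density m/n = 2.0
clauses = random_ksat(n, m, 3, rng)
assign = decimate(clauses, n)
if assign is not None:
    assert all(any((l > 0) == assign[abs(l)] for l in c) for c in clauses)
print("satisfied:", assign is not None)
```

At low density almost any sensible decimation rule succeeds; the phase transitions studied in the paper concern where, as the density grows, the marginals guiding the decimation stop being informative and the process starts producing contradictions.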
Discordant voting processes on finite graphs
We consider an asynchronous voting process on graphs which we call discordant voting, and which can be described as follows. Initially each vertex holds one of two opinions, red or blue say. Neighbouring vertices with different opinions interact pairwise, and after an interaction both vertices have the same colour. The quantity of interest is T, the time to reach consensus, i.e. the number of interactions needed for all vertices to have the same colour. An edge whose endpoint colours differ (i.e. one vertex is coloured red and the other one blue) is said to be discordant. A vertex is discordant if it is incident with a discordant edge. In discordant voting, all interactions are based on discordant edges. Because the voting process is asynchronous, there are several ways to update the colours of the interacting vertices. Push: pick a random discordant vertex and push its colour to a random discordant neighbour. Pull: pick a random discordant vertex and pull the colour of a random discordant neighbour. Oblivious: pick a random endpoint of a random discordant edge and push the colour to the other endpoint.
We show that ET, the expected time to reach consensus, depends strongly on the underlying graph and the update rule. For connected graphs on n vertices, and an initial half red, half blue colouring, the following hold. For oblivious voting, ET = n^2/4 independent of the underlying graph. For the complete graph K_n, the push protocol has ET = Θ(n log n), whereas the pull protocol has ET = Θ(2^n). For the cycle C_n all three protocols have ET = Θ(n^2). For the star graph, however, the pull protocol has ET = O(n^2), whereas the push protocol is slower with ET = Θ(n^2 log n). The wide variation in ET for the pull protocol is to be contrasted with the well known model of synchronous pull voting, for which ET = O(n) on many classes of expanders.
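The three update rules are easy to simulate. A sketch on the cycle C_n (sizes illustrative), where the abstract reports ET = Θ(n^2) for all three rules and exactly n^2/4 for oblivious voting:

```python
import random

def discordant_vote(n, rule, rng):
    """One run of discordant voting on the cycle C_n from a half red,
    half blue colouring; returns T, the number of interactions."""
    col = [0] * (n // 2) + [1] * (n - n // 2)
    steps = 0
    while True:
        disc = [(i, (i + 1) % n) for i in range(n)
                if col[i] != col[(i + 1) % n]]
        if not disc:
            return steps
        steps += 1
        if rule == "oblivious":
            u, v = rng.choice(disc)      # random discordant edge
            if rng.random() < 0.5:
                u, v = v, u              # random endpoint pushes
            col[v] = col[u]
        else:
            u = rng.choice(sorted({x for e in disc for x in e}))
            nbrs = [w for w in ((u - 1) % n, (u + 1) % n)
                    if col[w] != col[u]]
            v = rng.choice(nbrs)         # random discordant neighbour
            if rule == "push":
                col[v] = col[u]
            else:                        # pull
                col[u] = col[v]

rng = random.Random(0)
n, runs = 30, 20
for rule in ("push", "pull", "oblivious"):
    mean_t = sum(discordant_vote(n, rule, rng) for _ in range(runs)) / runs
    print(f"{rule:9s} mean T ~ {mean_t:7.1f}   (n^2/4 = {n * n // 4})")
```

On the cycle the two boundaries between the red and blue arcs perform random-walk-like motion, which is why all three rules land in the Θ(n^2) regime; the dramatic push/pull separations appear on the complete graph and the star.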
Wear Minimization for Cuckoo Hashing: How Not to Throw a Lot of Eggs into One Basket
We study wear-leveling techniques for cuckoo hashing, showing that it is
possible to achieve a memory wear bound of $\log\log n + O(1)$ after the
insertion of $n$ items into a table of size $Cn$ for a suitable constant $C$
using cuckoo hashing. Moreover, we study our cuckoo hashing method empirically,
showing that it significantly improves on the memory wear performance of
classic cuckoo hashing and linear probing in practice.
Comment: 13 pages, 1 table, 7 figures; to appear at the 13th Symposium on Experimental Algorithms (SEA 2014)
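A baseline for the wear discussion: textbook two-table cuckoo hashing instrumented with a write counter per slot. This is the classic scheme the paper improves upon, not its wear-leveled method; table sizes and the displacement limit are illustrative.

```python
import random

class CuckooHash:
    """Textbook two-table cuckoo hashing with per-slot write counters,
    so that "memory wear" (writes per cell) is observable."""
    def __init__(self, size, rng):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.wear = [[0] * size, [0] * size]
        self.salts = (rng.random(), rng.random())

    def _slot(self, key, i):
        return hash((key, self.salts[i])) % self.size

    def insert(self, key):
        i = 0
        for _ in range(64):              # displacement limit
            j = self._slot(key, i)
            self.wear[i][j] += 1         # every placement is a write
            if self.tables[i][j] is None:
                self.tables[i][j] = key
                return True
            key, self.tables[i][j] = self.tables[i][j], key
            i = 1 - i                    # evicted key tries the other table
        return False                     # would normally trigger a rehash

rng = random.Random(0)
h = CuckooHash(size=1500, rng=rng)
inserted = sum(h.insert(k) for k in range(1000))   # load factor 1/3
max_wear = max(max(w) for w in h.wear)
print(f"inserted {inserted}/1000, max slot wear {max_wear}")
```

Each eviction rewrites a cell, so slots on popular eviction paths accumulate wear; the paper's techniques bound the maximum number of times any one cell is written, which matters for memories with limited write endurance.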